
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

1 Introduction

Multi-modal generative models need to be able to perceive, process, and produce both discrete elements (such as text or code) and continuous elements (e.g. image, audio, and video data). While language models trained on the next token prediction objective dominate discrete modalities (OpenAI et al., 2024; Dubey et al., 2024), diffusion models (Ho et al., 2020; Rombach et al., 2022a) and their generalizations (Lipman et al., 2022) are the state of the art for generating continuous modalities (Dai et al., 2023; Esser et al., 2024b; Bar-Tal et al., 2024). Many efforts have been made to combine these approaches, including extending a language model to use a diffusion model as a tool, either explicitly (Liu et al., 2023) or by grafting a pretrained diffusion model onto the language model (Dong et al., 2023; Koh et al., 2024). Alternatively, one can quantize the continuous modalities (Van Den Oord et al., 2017) and train a standard language model over discrete tokens (Ramesh et al., 2021; Yu et al., 2022, 2023), simplifying the model’s architecture at the cost of losing information. In this work, we show it is possible to fully integrate both modalities, with no information loss, by training a single model to both predict discrete text tokens and diffuse continuous images.

We introduce Transfusion, a recipe for training a model that can seamlessly generate discrete and continuous modalities. We demonstrate Transfusion by pretraining a transformer model on 50% text and 50% image data, using a different objective for each modality: next token prediction for text and diffusion for images. The model is exposed to both modalities and loss functions at each training step. Standard embedding layers convert text tokens to vectors, while patchification layers represent each image as a sequence of patch vectors. We apply causal attention for text tokens and bidirectional attention for image patches. For inference, we introduce a decoding algorithm that combines the standard practices of text generation from language models and image generation from diffusion models. Figure 1 illustrates Transfusion.
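To make the mixed attention scheme concrete, the following is a minimal PyTorch-style sketch of how such a mask could be built: causal attention over the whole sequence, with bidirectional attention added between patches of the same image. The modality_ids encoding and the function name are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def transfusion_attention_mask(modality_ids: torch.Tensor) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask: True means query i may attend to key j.

    modality_ids: shape (seq_len,). 0 marks a text token; a positive integer k
    marks a patch belonging to the k-th image in the sequence (an illustrative
    encoding assumed here for clarity).
    """
    seq_len = modality_ids.shape[0]
    pos = torch.arange(seq_len)
    # Standard causal attention: every position sees itself and the past.
    causal = pos[None, :] <= pos[:, None]
    # Patches of the same image additionally attend to each other
    # bidirectionally, including later patches within that image.
    same_image = (modality_ids[:, None] == modality_ids[None, :]) & (modality_ids[:, None] > 0)
    return causal | same_image


# Example: two text tokens, a three-patch image, then one more text token.
ids = torch.tensor([0, 0, 1, 1, 1, 0])
print(transfusion_attention_mask(ids).int())
```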

In a controlled comparison with Chameleon’s discretization approach (Chameleon Team, 2024), we show that Transfusion models scale better in every combination of modalities. In text-to-image generation, we find that Transfusion exceeds the Chameleon approach at less than a third of the compute, as measured by both FID and CLIP scores. When controlling for FLOPs, Transfusion achieves approximately 2× lower FID scores than Chameleon models. We observe a similar trend in image-to-text generation, where Transfusion matches Chameleon at 21.8% of the FLOPs. Surprisingly, Transfusion is also more efficient at learning text-to-text prediction, achieving perplexity parity on text tasks with around 50% to 60% of Chameleon’s FLOPs.

Ablation experiments reveal critical components and potential improvements for Transfusion. We observe that intra-image bidirectional attention is important, and that replacing it with causal attention hurts text-to-image generation. We also find that adding U-Net down and up blocks to encode and decode images enables Transfusion to compress larger image patches with relatively small loss in performance, potentially decreasing serving costs by up to 64×.
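As a back-of-the-envelope illustration of where a factor like 64× can come from, the sketch below counts the patch vectors per image as a function of patch size. The 32×32 latent grid is an assumed example (e.g. a 256×256 image through an 8× downsampling VAE), not a figure quoted from the paper.

```python
def patches_per_image(latent_hw: int, patch_size: int) -> int:
    """Number of patch vectors the transformer sees for one image.

    latent_hw: side length of the square VAE latent grid (assumed value below);
    patch_size: side length of each square patch, in latent pixels.
    """
    assert latent_hw % patch_size == 0
    return (latent_hw // patch_size) ** 2


# Assuming a 32x32 latent grid:
for p in (1, 2, 4, 8):
    print(f"patch size {p}x{p}: {patches_per_image(32, p)} vectors per image")
# Going from 1x1 to 8x8 patches shrinks the image portion of the sequence by
# (8*8)/(1*1) = 64x, which is how an "up to 64x" serving-cost reduction can arise.
```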

Finally, we demonstrate that Transfusion can generate images at a quality similar to other diffusion models. We train a 7B transformer, enhanced with U-Net down/up layers (0.27B parameters), from scratch over 2T tokens: 1T text tokens, and approximately 5 epochs of 692M images and their captions, amounting to another 1T patches/tokens. Figure 2 shows generated images sampled from the model. On the GenEval (Ghosh et al., 2023) benchmark, our model outperforms other popular models such as DALL-E 2 and SDXL; unlike those image generation models, it can also generate text, reaching the same level of performance as Llama 1 on text benchmarks. Our experiments thus show that Transfusion is a promising approach for training truly multi-modal models.

Figure 1: A high-level illustration of Transfusion. A single transformer perceives, processes, and produces data of every modality. Discrete (text) tokens are processed autoregressively and trained on the next token prediction objective. Continuous (image) vectors are processed together in parallel and trained on the diffusion objective. Marker BOI and EOI tokens separate the modalities.

Figure 2: Generated images from a 7B Transfusion model trained on 2T multi-modal tokens.
